Deep-Learning Frameworks


fVDB: A Deep-Learning Framework for Sparse, Large-Scale, and High-Performance Spatial Intelligence

Williams, Francis, Huang, Jiahui, Swartz, Jonathan, Klár, Gergely, Thakkar, Vijay, Cong, Matthew, Ren, Xuanchi, Li, Ruilong, Fuji-Tsang, Clement, Fidler, Sanja, Sifakis, Eftychios, Museth, Ken

arXiv.org Artificial Intelligence

We present fVDB, a novel GPU-optimized framework for deep learning on large-scale 3D data. fVDB provides a complete set of differentiable primitives to build deep learning architectures for common tasks in 3D learning such as convolution, pooling, attention, ray tracing, and meshing. fVDB simultaneously provides a much larger feature set (primitives and operators) than established frameworks with no loss in efficiency: our operators match or exceed the performance of other frameworks with narrower scope. Furthermore, fVDB can process datasets with much larger footprint and spatial resolution than prior works, while providing a competitive memory footprint on small inputs. To achieve this combination of versatility and performance, fVDB relies on a single novel VDB index grid acceleration structure paired with several key innovations, including GPU-accelerated sparse grid construction, convolution using tensor cores, fast ray tracing kernels using a Hierarchical Digital Differential Analyzer (HDDA) algorithm, and jagged tensors. Our framework is fully integrated with PyTorch, enabling interoperability with existing pipelines, and we demonstrate its effectiveness on a number of representative tasks such as large-scale point-cloud segmentation, high-resolution 3D generative modeling, unbounded-scale Neural Radiance Fields, and large-scale point cloud reconstruction.
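The "jagged tensors" the abstract mentions refer to batching variable-sized 3D samples (e.g. point clouds of different sizes) as one flat buffer plus per-sample offsets, instead of zero-padding every sample to a common length. The following is a minimal, hypothetical sketch of that idea in plain Python; the class name and layout are illustrative assumptions, not fVDB's actual API.

```python
class JaggedBatch:
    """Hypothetical sketch of a jagged batch: variable-length samples
    stored contiguously with an offsets table, avoiding padding."""

    def __init__(self, samples):
        # samples: list of point clouds, each a list of 3D points
        self.flat = [p for cloud in samples for p in cloud]
        # offsets[i]..offsets[i+1] delimits the i-th sample in `flat`
        self.offsets = [0]
        for cloud in samples:
            self.offsets.append(self.offsets[-1] + len(cloud))

    def __len__(self):
        return len(self.offsets) - 1

    def __getitem__(self, i):
        # Recover the i-th sample as a slice of the flat storage.
        return self.flat[self.offsets[i]:self.offsets[i + 1]]
```

Compared with padding, memory use scales with the total number of points rather than batch size times the largest sample, which matters at the dataset scales the paper targets.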


Deep learning rising in importance within booming AI sector

#artificialintelligence

The application scenarios of China's artificial intelligence-powered deep-learning frameworks will become more diversified, buoyed by open-source platforms and large-scale industrial use, while costs and barriers to adoption continue to fall, said Ma Yanjun, general manager of Baidu AI technology ecosystem. Meanwhile, deep-learning frameworks will be integrated and innovated with more frontier industries such as scientific computing, quantum computing and life sciences, Ma said. Deep-learning frameworks, which are used by software developers to build AI applications, have been included in the field of next-generation AI and represent key cutting-edge technology supported by the nation during the 14th Five-Year Plan period (2021-25). As the first open-source deep-learning platform in China, Baidu's PaddlePaddle provides software developers of all skill levels with the tools, services and resources they need to rapidly adopt and implement deep learning at scale. Ma said that an increasing number of developers and companies are now building applications for intelligent transformation on domestic deep-learning platforms, creating solutions targeted at different scenarios across industries.


What Is Machine Learning?

#artificialintelligence

To learn a skill, we gather knowledge, practice carefully, and monitor our performance. Eventually, we become better at that activity. Machine learning is a technique that allows computers to do just that. We all know what we mean by intelligence when we say it, but describing it is problematic. Leaving aside emotion and self-awareness, a working description could be the ability to learn new skills and absorb knowledge and to apply them to new situations to achieve the desired outcome.


Researchers use deep learning to identify gene regulation at single-cell level

AIHub

Scientists at the University of California, Irvine have developed a new deep-learning framework that predicts gene regulation at the single-cell level. In a study published recently in Science Advances, UCI researchers describe how their deep-learning technique can also be successfully used to observe gene regulation at the cellular level. Until now, that process had been limited to tissue-level analysis. According to co-author Xiaohui Xie, UCI professor of computer science, the framework enables the study of transcription factor binding at the cellular level, which was previously impossible due to the intrinsic noise and sparsity of single-cell data. A transcription factor (TF) is a protein that controls the transcription of genetic information from DNA to RNA; TFs regulate genes to ensure they're expressed in the proper sequence and at the right time in cells.


Complex Sequential Data Analysis: A Systematic Literature Review of Existing Algorithms

Dandajena, Kudakwashe, Venter, Isabella M., Ghaziasgar, Mehrdad, Dodds, Reg

arXiv.org Machine Learning

This paper provides a review of past approaches to the use of deep-learning frameworks for the analysis of discrete, irregular-patterned complex sequential datasets. A typical example of such a dataset is financial data, where specific events trigger sudden irregular changes in the sequence. Traditional deep-learning methods perform poorly or even fail when trying to analyse these datasets. The results of a systematic literature review reveal the dominance of frameworks based on recurrent neural networks. The performance of deep-learning frameworks was found to be evaluated mainly using the mean absolute error and root mean square error accuracy metrics. The underlying challenges identified are a lack of performance robustness, non-transparency of methodology, and internal and external architectural design and configuration issues. These challenges provide an opportunity to improve frameworks for complex irregular-patterned sequential datasets.
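The two accuracy metrics the review found dominant, mean absolute error and root mean square error, can be written down directly. A minimal Python sketch (the function names are illustrative, not tied to any framework in the review):

```python
import math

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the residuals.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean square error: squares the residuals before averaging,
    # so it penalises large deviations more heavily than MAE does.
    return math.sqrt(
        sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    )
```

Because RMSE weights large errors more, the gap between the two metrics widens exactly on the sudden irregular jumps that make these sequences hard to model, which is one reason both are commonly reported together.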


Deep Java Library: New Deep Learning Toolkit for Java Developers

#artificialintelligence

At the 2019 AWS re:Invent conference, Amazon released Deep Java Library (DJL), an open-source library with Java APIs to simplify training, testing, deploying, and making predictions with deep-learning models. While Java has remained the first- or second-most popular programming language since the late 1990s, Python is the most widely used language for machine learning, with numerous resources and deep-learning frameworks. DJL aims to make open-source deep-learning tools accessible to Java developers through familiar concepts and intuitive APIs. Java developers can use their favorite IDE with DJL, or Jupyter Notebook-based code execution for Java. DJL is framework agnostic: it abstracts away commonly used deep-learning functions, using Java Native Access (JNA) on top of existing deep-learning frameworks, currently providing implementations for Apache MXNet and TensorFlow.



Inside the Chinese lab that plans to rewire the world with AI

MIT Technology Review

The ticket kiosks at Shanghai's frenetic subway stations have a mind of their own. Walk up to one and state your destination, and it'll automatically recommend a route before issuing a ticket. In the interest of reducing the rush-hour stampede, the system is set up to let you find information and buy tickets without pushing a button or talking to a person. More impressive still, all this happens successfully in the middle of a crowded, noisy station. Each kiosk has to figure out who is speaking to it; zero in on that person's voice within the crowd; transcribe the incoming speech; parse its meaning; and compare the person's face against a massive database of photos--all within a few seconds.


IBM, Intel Rethink Processor Designs to Accommodate AI Workloads - The New Stack

#artificialintelligence

Artificial intelligence is bringing new demands to processors. The algorithmic data crunching is different from earlier models of processing data highlighted by benchmarks like LINPACK. It is also changing computing architectures by de-emphasizing the CPU and harnessing the faster computing power of coprocessors. The CPU is just a facilitator, and a lot of deep learning is done on accelerator chips like GPUs, FPGAs and Google's Tensor Processing Unit. Major hardware companies like IBM, Intel, Nvidia and AMD are embracing the change in architecture, tuning hardware to encourage the creation of artificial neural nets as envisioned by researchers in the 1960s.


Forecasting Waves with Deep Learning (ENGINEERING.com)

#artificialintelligence

The ocean is indeed a strange place, but Whitman might not have found it quite so confounding if he'd had access to deep learning. This technology is allowing machines to do everything from disease diagnosis to musical composition to playing video games. Now, a team of scientists and engineers at the IBM Research lab in Dublin has set deep learning on that harshest of mistresses: the sea. Their deep-learning framework for simulating ocean waves enables real-time wave condition forecasts for a fraction of the traditional computational cost. How are wave forecasts traditionally calculated?